'''Quality-of-Data (QoD)'''〔Designation coined by L. Veiga. url=https://en.wikipedia.org/wiki/Vector-field_consistency〕 specifies the required quality of service of a distributed storage system from the point of view of the consistency〔url=https://en.wikipedia.org/wiki/Consistency_model〕 of its data. It can be used to support big-data management frameworks, workflow management, and HPC systems, mainly for data replication and consistency. It takes data semantics into account, namely the time interval of data freshness, the number of outstanding versions of the data tolerated before a read forces a refresh, and the value divergence allowed before the data is displayed.

QoD was initially based on a model from existing research work on vector-field consistency, which was awarded the best-paper prize at the ACM/IFIP/USENIX Middleware Conference 2007 and was later enhanced for increased scalability and fault tolerance. This consistency model has been successfully applied to the big-data key/value store Apache HBase,〔url=https://hbase.apache.org〕 initially as a middleware module sitting between clusters in separate data centres. The HBase-QoD coupling minimises bandwidth usage and optimises resource allocation during replication, achieving the desired consistency at a finer granularity. QoD is defined by the three dimensions of the vector k = (θ, σ, ν) (time, sequence, and value, respectively), but with a broader view of the issue, applicable also to large-scale data management techniques with regard to timely data delivery.〔url=http://www-01.ibm.com/software/data/quality/〕

== Other Descriptions ==
Quality-of-Data should not be confused with other definitions of data quality, such as:
- Completeness
- Validity
- Accuracy
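
== Example ==
The following is a minimal, hypothetical sketch of how a QoD bound k = (θ, σ, ν) described above might be represented and enforced by a replication layer. It is not the published HBase-QoD implementation; the class and method names are illustrative assumptions. Java is chosen because Apache HBase is itself written in Java.

<syntaxhighlight lang="java">
// Hypothetical illustration of a QoD vector k = (theta, sigma, nu).
// Names and structure are assumptions, not the published HBase-QoD code.
public final class QodBound {
    private final long thetaMillis; // θ: maximum staleness (time since last propagation)
    private final int sigma;        // σ: maximum number of outstanding, unpropagated versions
    private final double nu;        // ν: maximum tolerated value divergence

    public QodBound(long thetaMillis, int sigma, double nu) {
        this.thetaMillis = thetaMillis;
        this.sigma = sigma;
        this.nu = nu;
    }

    // The bound is violated, and propagation must be triggered, as soon as
    // ANY of the three dimensions reaches its limit.
    public boolean mustPropagate(long millisSinceLastSync,
                                 int outstandingVersions,
                                 double valueDivergence) {
        return millisSinceLastSync >= thetaMillis
                || outstandingVersions >= sigma
                || Math.abs(valueDivergence) >= nu;
    }

    public static void main(String[] args) {
        // Tolerate up to 10 s of staleness, 100 pending versions,
        // or an absolute value drift of 5.0 before forcing propagation.
        QodBound k = new QodBound(10_000, 100, 5.0);
        System.out.println(k.mustPropagate(2_000, 3, 0.5));   // false: within all bounds
        System.out.println(k.mustPropagate(2_000, 150, 0.5)); // true: σ exceeded
    }
}
</syntaxhighlight>

Under such a scheme, updates that stay within the bound can be batched locally, and cross-datacentre replication traffic is generated only when one of the three dimensions is exhausted, which is what allows a QoD-aware replication layer to trade consistency against bandwidth and resource usage.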